March 2026
Author: Andrew Castro
HDS is a full-stack LLM claim-verification application that analyzes generated text, breaks it into
structured factual claims, retrieves supporting evidence from the Wikipedia / MediaWiki API, and
returns explainable per-claim verification results.
The system combines a Python + FastAPI backend with a TypeScript + Next.js frontend to
demonstrate NLP pipeline design, retrieval-aware verification, contradiction-aware scoring, and explainable
AI product UX across separately deployed frontend and backend services.
Visit the deployed React / Next.js frontend to analyze LLM output, inspect claim-level retrieval, and view explainable verification metadata in the production UI.
Open Live Application

Review the full repository, backend services, frontend interface, release notes, and deployment configuration for the current HDS architecture.
View GitHub Repository

Useful for inspecting LLM output from chatbots and identifying where support is strong, weak, contradictory, or missing.
Shows how to present model-evaluation decisions transparently through metadata, retrieval paths, and evidence snippets instead of black-box scoring.
Demonstrates backend architecture, retrieval logic, NLP preprocessing, modern frontend delivery, and cloud deployment in one integrated project.
spaCy parsing and rule-based filters convert multi-sentence LLM text into factual, verifiable claim candidates.
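The claim-candidate step can be sketched as follows. The production pipeline uses spaCy parsing; this minimal sketch substitutes a regex sentence split, and the filter rules and thresholds below are illustrative stand-ins, not the actual HDS rules.

```python
import re

def split_sentences(text: str) -> list[str]:
    # Naive sentence-boundary split on ., !, ? followed by whitespace
    # (stand-in for spaCy's sentence segmentation).
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

def is_claim_candidate(sentence: str) -> bool:
    # Rule-based filters: drop questions, very short fragments, and
    # hedged opinion markers unlikely to be verifiable facts.
    if sentence.endswith("?"):
        return False
    if len(sentence.split()) < 4:
        return False
    opinion_markers = ("i think", "in my opinion", "probably", "perhaps")
    lowered = sentence.lower()
    if any(m in lowered for m in opinion_markers):
        return False
    return True

def extract_claim_candidates(text: str) -> list[str]:
    # Multi-sentence LLM text in, factual claim candidates out.
    return [s for s in split_sentences(text) if is_claim_candidate(s)]
```

The value of this stage is mostly in what it discards: questions and opinions never reach retrieval, so they can never produce spurious verification labels.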
Pronouns and dependent references are rewritten into retrieval-ready claims when context confidence is high enough.
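A minimal sketch of the confidence-gated rewrite: in the real system the antecedent and its confidence come from parsing and context analysis, whereas here both are passed in directly, and the 0.8 threshold is a placeholder, not the HDS value.

```python
# Placeholder threshold; the real gate and its value come from the pipeline.
CONFIDENCE_THRESHOLD = 0.8

def rewrite_pronoun(claim: str, pronoun: str, antecedent: str, confidence: float) -> str:
    """Replace the pronoun with its antecedent only when context confidence
    is high enough; otherwise keep the claim unchanged rather than guess."""
    if confidence < CONFIDENCE_THRESHOLD:
        return claim
    return claim.replace(pronoun, antecedent, 1)
```

Leaving low-confidence claims untouched is the safer failure mode: an unresolved pronoun leads to weak retrieval, while a wrongly resolved one leads to confidently verifying the wrong claim.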
Wikipedia pages are selected using subject-first queries, fallback claim search, title checks, and page extract cleanup.
Chunk comparison, semantic scoring, contradiction checks, and numeric/date matching generate explainable claim outcomes.
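The numeric/date matching component can be illustrated in isolation: extract figures from the claim and the evidence chunk, then flag a contradiction when the claim's figure is absent but different figures are present. This is a simplified sketch of the idea, not the actual HDS matcher.

```python
import re

# Matches plain integers/decimals up to four leading digits (covers years).
NUMBER_RE = re.compile(r"\b\d{1,4}(?:[.,]\d+)?\b")

def extract_numbers(text: str) -> set[str]:
    return set(NUMBER_RE.findall(text))

def numeric_verdict(claim: str, evidence: str) -> str:
    claim_nums = extract_numbers(claim)
    evidence_nums = extract_numbers(evidence)
    if not claim_nums:
        return "no_numeric_content"
    if claim_nums <= evidence_nums:
        return "supported"       # every figure in the claim appears in evidence
    if evidence_nums:
        return "contradicted"    # evidence gives different figures
    return "unverified"          # evidence is silent on the numbers
```

Separating "contradicted" from "unverified" is what makes the outcome explainable: a mismatched figure and a missing figure are different failure modes and should be surfaced differently.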
HDS intentionally prioritizes inspectability over overconfident scoring. That means the UI exposes retrieval status, grounding state, subject queries, fallback search queries, contradiction reasons, source pages, and evidence text so users can understand why a result was produced. This was a major design goal because weak retrieval can easily create false positives if users only see a label and not the retrieval path behind it.
The project still has important limitations: semantic matching can over- or under-estimate support, Wikipedia is only one evidence source, and contradiction logic is still heuristic rather than full natural-language inference. Even so, the current beta version is a substantial step toward a more transparent claim-verification workflow and a stronger demonstration of full-stack software engineering.
HDS is a portfolio project focused on the engineering challenges behind verifying LLM-generated claims, not on pretending to be a perfect fact-checking engine. It demonstrates how to combine NLP preprocessing, retrieval, contradiction-aware scoring, typed API design, frontend explainability, and cloud deployment into a cohesive product. From a recruiter or engineering-review perspective, the value of the project is in the system design, debugging transparency, and end-to-end full-stack implementation rather than in claiming 100% factual accuracy.